The provincial government has given the city a grant to fund new public art in parks.
A colleague has created a workspace to analyze the amount of art in each city park, and we are carrying out a code review to ensure that the workspace is efficient and well-designed.
Start FME Workbench (2025.1 or later) and open the starting workspace.
Reading from the web can introduce variability that makes it difficult to assess workspace performance accurately. Therefore, this workspace uses local paths to C:\FMEData. If you do not have FMEData on your machine, you can download the source data (linked in the Resources section above) and point the workspace at the local files.
Your colleague ran the workspace with caching turned on and informed you it took about twenty seconds to run. Try running it yourself with data caching enabled and the default parameters. Note the total runtime. It should look something like this:
2025-07-10 15:40:11| 15.4| 0.0|INFORM|Translation was SUCCESSFUL with 9 warning(s) (70 feature(s) output)
2025-07-10 15:40:11| 15.4| 0.0|INFORM|FME Session Duration: 16.4 seconds. (CPU: 11.1s user, 4.3s system)
2025-07-10 15:40:11| 15.4| 0.0|INFORM|END - ProcessID: 35068, peak process memory usage: 574780 kB, current process memory usage: 156892 kB
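If you prefer not to hunt for the runtime by eye, the log line to look for is the one containing "FME Session Duration". Below is a minimal Python sketch that extracts it from a saved log file; the file name is hypothetical, so point it at wherever you save your own translation log.

```python
import re

# Minimal sketch: pull the session duration out of a saved FME log file.
# The file name below is hypothetical; save your own translation log and adjust the path.
def session_duration(log_path):
    pattern = re.compile(r"FME Session Duration: ([\d.]+) seconds")
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                return float(match.group(1))
    return None

print(session_duration("art_in_parks_cached.log"))  # e.g. 16.4
```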
Now, to test production performance, turn off data caching. There are no Inspector or Logger transformers in the workspace, and no parts of it are disabled. Re-run the workspace.
You should find that the workspace runs much more quickly this time, perhaps in as little as five seconds:
2025-07-10 15:39:19| 5.1| 0.0|INFORM|Translation was SUCCESSFUL with 2 warning(s) (70 feature(s) output)
2025-07-10 15:39:19| 5.1| 0.0|INFORM|FME Session Duration: 5.7 seconds. (CPU: 3.1s user, 2.0s system)
2025-07-10 15:39:19| 5.1| 0.0|INFORM|END - ProcessID: 68592, peak process memory usage: 444516 kB, current process memory usage: 154644 kB
This clearly shows the performance cost that feature caching can add. In this example, running without caching cut the total runtime from 16.4 seconds to 5.7 seconds, a decrease of roughly 65%. The effect is even more pronounced in a workspace processing large raster datasets. Caching shouldn't be required in a production workspace, because the authoring and debugging phases of development should be complete by then.
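As a quick sanity check on the runtime comparison above, here is the arithmetic using the session durations from the two example logs; your own timings will differ.

```python
# Session durations (seconds) taken from the example logs above.
cached = 16.4    # with feature caching enabled
uncached = 5.7   # with feature caching disabled

decrease = (cached - uncached) / cached * 100
print(f"Turning caching off cut the runtime by about {decrease:.0f}%")  # roughly 65%
```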
Although the workspace now runs very quickly, let's assess the relative performance of each section anyway.
First, disable all components downstream of the readers and run the workspace. Make a note of how long the workspace now takes to run (still without caching):
2025-07-10 15:38:27| 1.7| 0.0|INFORM|Translation was SUCCESSFUL with 2 warning(s) (0 feature(s) output)
2025-07-10 15:38:27| 1.7| 0.0|INFORM|FME Session Duration: 1.9 seconds. (CPU: 0.9s user, 0.8s system)
2025-07-10 15:38:27| 1.7| 0.0|INFORM|END - ProcessID: 7236, peak process memory usage: 141920 kB, current process memory usage: 128364 kB
Now enable all transformers, leaving only the writers disabled. Re-run the workspace and make a note of the performance:
2025-07-10 15:37:28| 1.9| 0.0|INFORM|Translation was SUCCESSFUL with 2 warning(s) (0 feature(s) output)
2025-07-10 15:37:28| 1.9| 0.0|INFORM|FME Session Duration: 2.3 seconds. (CPU: 1.1s user, 0.8s system)
2025-07-10 15:37:28| 1.9| 0.0|INFORM|END - ProcessID: 63704, peak process memory usage: 173540 kB, current process memory usage: 152356 kB
From this, we can calculate the time taken by each stage of the workspace: the reading stage is the duration of the run with everything downstream of the readers disabled, the transformation stage is the difference between the last two runs, and the writing stage is the difference between the full run and the run with only the writers disabled.
Do these calculations; you'll need the answers for the quiz.
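If you want to check your arithmetic, the sketch below uses the example durations from the logs above; substitute the values from your own runs before answering the quiz.

```python
# Example durations (seconds) taken from the logs above; your runs will differ.
readers_only = 1.9              # everything downstream of the readers disabled
readers_and_transformers = 2.3  # only the writers disabled
full_run = 5.7                  # nothing disabled, caching off

reading_time = readers_only
transformation_time = readers_and_transformers - readers_only
writing_time = full_run - readers_and_transformers

print(f"Reading: {reading_time:.1f}s")
print(f"Transformation: {transformation_time:.1f}s")
print(f"Writing: {writing_time:.1f}s")
```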
This course has illustrated basic techniques for designing with performance in mind. To learn more, check out the Optimize Workspace Performance course.